    Filtering data from the Collaborative Initial Glaucoma Treatment Study for improved identification of glaucoma progression

    Abstract
    Background: Open-angle glaucoma (OAG) is a prevalent, degenerative ocular disease which can lead to blindness without proper clinical management. The tests used to assess disease progression are susceptible to process and measurement noise. The aim of this study was to develop a methodology that accounts for the inherent noise in the data and improves the identification of significant disease progression.
    Methods: Longitudinal observations from the Collaborative Initial Glaucoma Treatment Study (CIGTS) were used to parameterize and validate a Kalman filter model and logistic regression function. The Kalman filter estimates the true value of biomarkers associated with OAG and forecasts future values of these variables. We develop two logistic regression models via generalized estimating equations (GEE) for calculating the probability of experiencing significant OAG progression: one model based on the raw measurements from CIGTS and another based on the Kalman filter estimates of the CIGTS data. Receiver operating characteristic (ROC) curves and associated area under the ROC curve (AUC) estimates are calculated using cross-fold validation.
    Results: The logistic regression model developed using Kalman filter estimates as data input achieves higher sensitivity and specificity than the model developed using raw measurements. The mean AUC for the Kalman filter-based model is 0.961, while the mean AUC for the raw-measurements model is 0.889. Hence, using the probability function generated via Kalman filter estimates and GEE for logistic regression, we are able to more accurately classify patients and instances as experiencing significant OAG progression.
    Conclusion: A Kalman filter approach for estimating the true value of OAG biomarkers resulted in data input which improved the accuracy of a logistic regression classification model compared to a model using raw measurements as input. This methodology accounts for process and measurement noise to enable improved discrimination between progression and nonprogression in chronic diseases.
    http://deepblue.lib.umich.edu/bitstream/2027.42/109450/1/12911_2013_Article_773.pd
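
    As a reading aid, the sketch below shows the kind of scalar Kalman filter the abstract describes, applied to a noisy biomarker series such as intraocular pressure. The random-walk state model and the noise variances q and r are illustrative assumptions; the actual CIGTS parameterization is not reproduced here.

```python
import numpy as np

def kalman_smooth(measurements, q=0.05, r=1.0, x0=None, p0=1.0):
    """Scalar Kalman filter over a noisy biomarker series.

    q : process-noise variance (true biomarker drift), assumed
    r : measurement-noise variance (test variability), assumed
    Returns the filtered estimates of the underlying biomarker value.
    """
    x = measurements[0] if x0 is None else x0  # state estimate
    p = p0                                     # estimate variance
    estimates = []
    for z in measurements:
        # Predict: random-walk state model (identity transition)
        p = p + q
        # Update: blend the prediction with the new measurement
        k = p / (p + r)          # Kalman gain
        x = x + k * (z - x)
        p = (1 - k) * p
        estimates.append(x)
    return np.array(estimates)

# Example: noisy intraocular-pressure readings (mmHg), values illustrative
iop = np.array([21.0, 24.5, 19.8, 23.1, 25.6, 22.4, 26.0, 27.3])
print(kalman_smooth(iop).round(2))
```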

    Predicting Emergency Department Volume Using Forecasting Methods to Create a “Surge Response” for Noncrisis Events

    Objectives: This study investigated whether emergency department (ED) variables could be used in mathematical models to predict a future surge in ED volume based on recent levels of use of physician capacity. The models may be used to guide decisions related to on-call staffing in non-crisis-related surges of patient volume.
    Methods: A retrospective analysis was conducted using information spanning July 2009 through June 2010 from a large urban teaching hospital with a Level I trauma center. A comparison of significance was used to assess the impact of multiple patient-specific variables on the state of the ED. Physician capacity was modeled based on historical physician treatment capacity and productivity. Binary logistic regression analysis was used to determine the probability that the available physician capacity would be sufficient to treat all patients forecasted to arrive in the next time period. The prediction horizons used were 15 minutes, 30 minutes, 1 hour, 2 hours, 4 hours, 8 hours, and 12 hours. Five consecutive months of patient data from July 2010 through November 2010, similar to the data used to generate the models, were used to validate the models. Positive predictive values, Type I and Type II errors, and real-time accuracy in predicting noncrisis surge events were used to evaluate the forecast accuracy of the models.
    Results: The ratio of new patients requiring treatment to total physician capacity (termed the care utilization ratio [CUR]) was deemed a robust predictor of the state of the ED (with a CUR greater than 1 indicating that the physician capacity would not be sufficient to treat all patients forecasted to arrive). Prediction intervals of 30 minutes, 8 hours, and 12 hours performed best of all models analyzed, with deviances of 1.000, 0.951, and 0.864, respectively. A 95% significance level was used to validate the models against the July 2010 through November 2010 data set. With the 30-minute prediction model, positive predictive values ranged from 0.738 to 0.872, true positives ranged from 74% to 94%, and true negatives ranged from 70% to 90%, depending on the threshold used to determine the state of the ED.
    Conclusions: The CUR is a new and robust indicator of an ED system's performance. By investigating different prediction intervals, the study was able to model the tradeoff of longer time to response versus shorter but more accurate predictions. Current practice would have been improved by using the proposed models, which would have identified the surge in patient volume earlier on noncrisis days.
    Peer Reviewed
    http://deepblue.lib.umich.edu/bitstream/2027.42/92015/1/j.1553-2712.2012.01359.x.pd
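
    A minimal sketch of the CUR logic described above: the ratio of forecasted new patients to available physician capacity, fed into a logistic surge-probability model. The coefficients beta0 and beta1 are hypothetical placeholders, not the fitted values from the study.

```python
import math

def care_utilization_ratio(forecast_arrivals, physician_capacity):
    """CUR = forecasted new patients / total physician treatment capacity.
    A CUR above 1 indicates capacity will not cover forecasted arrivals."""
    return forecast_arrivals / physician_capacity

def surge_probability(cur, beta0=-6.0, beta1=6.5):
    """Logistic model of surge risk for the next time period.
    beta0 and beta1 are illustrative placeholders, not the study's
    fitted coefficients."""
    return 1.0 / (1.0 + math.exp(-(beta0 + beta1 * cur)))

# Example: 18 forecasted arrivals in the next 30 minutes against
# capacity to begin treating 15 patients (illustrative numbers)
cur = care_utilization_ratio(18, 15)
print(f"CUR = {cur:.2f}, surge probability = {surge_probability(cur):.2f}")
```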

    Missed Opportunities in Preventing Hospital Readmissions: Redesigning Post‐Discharge Checkup Policies

    Peer Reviewed
    https://deepblue.lib.umich.edu/bitstream/2027.42/147049/1/poms12858.pdf
    https://deepblue.lib.umich.edu/bitstream/2027.42/147049/2/poms12858_am.pdf
    https://deepblue.lib.umich.edu/bitstream/2027.42/147049/3/poms12858-sup-0001-AppendixS1.pd

    Introduction to Metamodeling for Reducing Computational Burden of Advanced Analyses with Health Economic Models: A Structured Overview of Metamodeling Methods in a 6-Step Application Process

    Metamodels can be used to reduce the computational burden associated with computationally demanding analyses of simulation models, although applications within health economics are still scarce. Besides a lack of awareness of their potential within health economics, the absence of guidance on the conceivably complex and time-consuming process of developing and validating metamodels may contribute to their limited uptake. To address these issues, this article introduces metamodeling to the wider health economic audience and presents a process for applying metamodeling in this context, including suitable methods and directions for their selection and use. General (i.e., non-health-economic-specific) metamodeling literature, clinical prediction modeling literature, and a previously published literature review were exploited to consolidate a process and to identify candidate metamodeling methods. Methods were considered applicable to health economics if they are able to account for mixed (i.e., continuous and discrete) input parameters and continuous outcomes. Six steps were identified as relevant for applying metamodeling methods within health economics:
    1) identification of a suitable metamodeling technique,
    2) simulation of data sets according to a design of experiments,
    3) fitting of the metamodel,
    4) assessment of metamodel performance,
    5) conducting the required analysis using the metamodel, and
    6) verification of the results.
    Different methods are discussed to support each step, including their characteristics, directions for use, key references, and relevant R and Python packages. To address challenges regarding the selection of metamodeling methods, a first guide was developed for using metamodels to reduce the computational burden of analyses of health economic models. This guidance may increase the application of metamodeling in health economics, enabling increased use of state-of-the-art analyses (e.g., value of information analysis) with computationally burdensome simulation models.
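
    To make the six steps concrete, here is a minimal Python sketch of steps 2 through 4: sampling a Latin hypercube design, fitting a Gaussian process metamodel, and assessing its fit on a validation design. The toy simulation_model function stands in for a computationally expensive health economic model and is purely illustrative.

```python
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF
from sklearn.metrics import r2_score

# Stand-in for an expensive health economic simulation model:
# maps two input parameters to a QALY-like continuous outcome.
def simulation_model(x):
    return np.sin(3 * x[:, 0]) + 0.5 * x[:, 1] ** 2

# Step 2: simulate training data with a Latin hypercube design
sampler = qmc.LatinHypercube(d=2, seed=0)
X_train = qmc.scale(sampler.random(n=40), [0, 0], [1, 2])
y_train = simulation_model(X_train)

# Step 3: fit the metamodel (Gaussian process with an RBF kernel)
gp = GaussianProcessRegressor(kernel=RBF(), normalize_y=True).fit(X_train, y_train)

# Step 4: assess metamodel performance on a fresh validation design
X_val = qmc.scale(qmc.LatinHypercube(d=2, seed=1).random(n=20), [0, 0], [1, 2])
print("validation R^2:", round(r2_score(simulation_model(X_val), gp.predict(X_val)), 3))
```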

    Optimal Screening for Hepatocellular Carcinoma: A Restless Bandit Model

    Strategic health workforce planning

    Analysts predict impending shortages in the health care workforce, yet wages for health care workers already account for over half of U.S. health expenditures. It is thus increasingly important to adequately plan to meet health workforce demand at reasonable cost. Using infinite linear programming methodology, we propose an infinite-horizon model for health workforce planning in a large health system for a single worker class, e.g., nurses. We give a series of common-sense conditions that any system of this kind should satisfy and use them to prove the optimality of a natural lookahead policy. We then use real-world data to examine how such policies perform in more complex systems; in particular, our experiments show that a natural extension of the lookahead policy performs well when incorporating stochastic demand growth.
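
    The sketch below illustrates the flavor of a one-period lookahead policy: each period, hire just enough workers to cover next period's forecasted demand after attrition. The attrition and demand-growth numbers are illustrative assumptions, not the paper's model or data.

```python
def lookahead_hires(staff, demand_forecast, attrition_rate):
    """Hire just enough so that post-attrition staffing meets next
    period's forecasted demand (a one-period lookahead rule)."""
    surviving = staff * (1 - attrition_rate)
    return max(0.0, demand_forecast - surviving)

# Toy run: 3% annual attrition, 2% demand growth (illustrative numbers)
staff, demand = 1000.0, 1000.0
for year in range(1, 6):
    demand *= 1.02
    hires = lookahead_hires(staff, demand, attrition_rate=0.03)
    staff = staff * 0.97 + hires
    print(f"year {year}: demand {demand:.0f}, hires {hires:.0f}, staff {staff:.0f}")
```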

    Implications of True and Perceived Treatment Burden on Cardiovascular Medication Use

    Background: Clinical decisions require weighing possible risks and benefits, which are often based on the provider's sense of treatment burden. Patients often have a different view of how heavily treatment burden should be weighted.
    Objective: To examine how much small variations in patient treatment burden would influence optimal use of antihypertensive medications, and how much over- and undertreatment can result from clinicians misunderstanding their patients' values.
    Methods:
    Analysis: Markov chain model.
    Data sources: Existing literature, including an individual-level meta-analysis of blood pressure trials.
    Target population: US representative sample, ages 40 to 85, with no history of cardiovascular disease.
    Time horizon: Effect of 10 years of treatment on estimated lifetime quality-adjusted life-year (QALY) burden.
    Perspective: Patient.
    Outcome measures: QALYs gained by treatment.
    Results: Fairly small differences in true patient burden from blood pressure treatment alter the number of blood pressure medications that should be recommended and dramatically alter treatment's potential benefit. We also found that a clinician misunderstanding the patient's burden could lead to almost 30% of patients being treated inappropriately.
    Limitations: Our results are based on simulation modeling.
    Conclusions: Clinical decisions that fail to account for patient treatment burden can lead to mistreatment of a very large proportion of the public. Successful treatment choices depend closely on a clinician's ability to accurately gauge a patient's treatment burden.
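
    A minimal two-state Markov cohort sketch of the mechanism the abstract describes: a small annual utility decrement for taking medication (the treatment burden) can offset, or even reverse, the QALY gain from a reduced event rate. All transition probabilities and utilities below are illustrative assumptions, not the study's calibration.

```python
def lifetime_qalys(treat, burden, years=40, p_event_untreated=0.02,
                   risk_reduction=0.25, utility_well=0.90, utility_post=0.70):
    """Two-state (event-free vs. post-cardiovascular-event) cohort sketch.
    `burden` is the annual utility decrement of taking medication.
    All parameters are illustrative, not the study's calibration."""
    p_event = p_event_untreated * ((1 - risk_reduction) if treat else 1.0)
    well = 1.0          # fraction of the cohort still event-free
    qalys = 0.0
    for _ in range(years):
        u = utility_well - (burden if treat else 0.0)
        qalys += well * u + (1 - well) * utility_post
        well *= (1 - p_event)
    return qalys

# Even a small per-year burden can erase the benefit of treatment
for burden in (0.0, 0.005, 0.02):
    gain = lifetime_qalys(True, burden) - lifetime_qalys(False, 0.0)
    print(f"treatment burden {burden:.3f} -> QALY gain {gain:+.2f}")
```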